National Repository of Grey Literature
Fooling of Algorithms of Computer Vision
Hrabal, Matěj ; Bartl, Vojtěch (referee) ; Herout, Adam (advisor)
The goal of this work was to survey existing methods of fooling computer vision and recognition systems. The focus is on the group of methods known as pixel attacks. Another part of the thesis covers methods for detecting and defending against computer vision fooling. Various pixel attack methods, together with defences against these kinds of attacks, were implemented in the Python programming language using the Keras library. The resulting solution is a standalone application that lets the user run various pixel attack methods on a chosen image; the tool also collects statistics from the performed pixel attacks and can detect possible attacks in images.
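A minimal sketch of the pixel-attack idea described above, assuming a Keras image classifier `model`, an input `image` scaled to [0, 1], and its `true_label`; the names and the simple random-search strategy are illustrative only and are not taken from the thesis implementation.
```python
import numpy as np

def one_pixel_attack(model, image, true_label, n_trials=500):
    """Try random single-pixel perturbations until the predicted class changes."""
    h, w, c = image.shape
    for _ in range(n_trials):
        candidate = image.copy()
        y, x = np.random.randint(h), np.random.randint(w)
        candidate[y, x] = np.random.rand(c)           # overwrite one pixel
        pred = model.predict(candidate[None, ...], verbose=0)[0]
        if np.argmax(pred) != true_label:             # misclassification found
            return candidate, (y, x)
    return None, None                                  # attack failed
```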
Deep Learning for Image Recognition
Munzar, Milan ; Kolář, Martin (referee) ; Hradiš, Michal (advisor)
Neural networks are among today's state-of-the-art models for machine learning. One may find them in autonomous robotic systems, object and speech recognition, prediction, and many other AI tasks. The thesis describes this model and its extensions used in object recognition, and then explains the application of convolutional neural networks (CNNs) to image recognition on the Caltech101 and CIFAR-10 datasets. Using this example application, the thesis discusses and measures the efficiency of techniques used in CNNs. The results show that convolutional networks without advanced extensions are able to reach 80% recognition accuracy on CIFAR-10 and 37% accuracy on Caltech101.
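The abstract does not name a framework, so the following sketch uses Keras purely to illustrate the kind of plain CNN, without advanced extensions, that such a CIFAR-10 experiment evaluates.
```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),                   # CIFAR-10 image size
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),           # CIFAR-10 has 10 classes
])
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```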
Neural Network Implementation without Multiplication
Slouka, Lukáš ; Baskar, Murali Karthick (referee) ; Szőke, Igor (advisor)
The subject of this thesis is neural network acceleration with the goal of reducing the number of floating-point multiplications. The theoretical part surveys current trends and methods in the field of neural network acceleration, with a focus on binarization techniques, which allow multiplications to be replaced with logical operators. The theory is put into practice in two ways: first, a GPU implementation of the crucial binary operators in the TensorFlow framework together with a performance benchmark; second, an application of these operators in a simple image classifier. The results are encouraging: the implemented operators achieve a speed-up by a factor of 2.5 compared to highly optimized cuBLAS operators. The last chapter compares the accuracies achieved by binarized models and their full-precision counterparts on various architectures.
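A minimal sketch of the core binarization idea: for weights and activations constrained to {-1, +1}, a dot product reduces to an XOR/XNOR followed by a popcount, so no floating-point multiplications are needed. This plain-Python illustration is an assumption-level example and does not reflect the thesis's actual GPU or TensorFlow kernels.
```python
import numpy as np

def pack_bits(v):
    """Encode a {-1, +1} vector as an integer bit mask (1 bit per element)."""
    bits = 0
    for x in v:
        bits = (bits << 1) | (1 if x > 0 else 0)
    return bits

def binary_dot(a_bits, b_bits, n):
    """dot(a, b) = n - 2 * popcount(a XOR b) for {-1, +1} vectors of length n."""
    return n - 2 * bin(a_bits ^ b_bits).count("1")

a = np.sign(np.random.randn(64)).astype(int)
b = np.sign(np.random.randn(64)).astype(int)
assert binary_dot(pack_bits(a), pack_bits(b), 64) == int(a @ b)
```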
Deep Learning for Image Recognition
Kozel, Michal ; Španěl, Michal (referee) ; Hradiš, Michal (advisor)
Neural networks are currently the state-of-the-art technology for speech, image, and other recognition tasks. This thesis describes the basic properties of neural networks and how they are trained. Its aim was to extend the Caffe framework with new learning methods, namely RMSProp and normalized SGD, and to compare their performance on the CIFAR-10 dataset.
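For reference, a plain NumPy sketch of the standard RMSProp update rule that the thesis adds to Caffe; the hyperparameter names and defaults are the usual textbook ones, not necessarily those used in the thesis.
```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """Scale each gradient component by a running RMS of its past magnitudes."""
    cache = decay * cache + (1.0 - decay) * grad**2   # running mean of squared grads
    w = w - lr * grad / (np.sqrt(cache) + eps)        # per-parameter adaptive step
    return w, cache
```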
